Online Portfolio Selection
Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness
Tsai, Chung-En, Lin, Ying-Ting, Li, Yen-Huan
This work introduces the first small-loss and gradual-variation regret bounds for online portfolio selection, marking the first instances of data-dependent bounds for online convex optimization with non-Lipschitz, non-smooth losses. The proposed algorithms achieve sublinear regret in the worst case and logarithmic regret when the data is "easy," with per-round time almost linear in the number of investment alternatives. The regret bounds are derived via novel smoothness characterizations of the logarithmic loss, a local norm-based analysis of follow-the-regularized-leader (FTRL) with self-concordant regularizers that are not necessarily barriers, and an implicit variant of optimistic FTRL with the log-barrier.
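To make the FTRL-with-log-barrier scheme in this abstract concrete, here is a minimal numpy sketch of (non-optimistic) FTRL for online portfolio selection: each round plays the portfolio minimizing the cumulative logarithmic loss plus a log-barrier regularizer over the simplex. The barrier weight, the inner entropic-mirror-descent solver, and its step size are illustrative choices, not the paper's tuned algorithm.

```python
import numpy as np

def solve_simplex_min(grad_fn, d, steps=500, eta=0.05):
    """Approximately minimize a convex function over the probability simplex
    via entropic mirror descent (exponentiated gradient)."""
    x = np.full(d, 1.0 / d)
    for _ in range(steps):
        g = grad_fn(x)
        g = g - g.min()                 # shift invariant, avoids overflow
        x = x * np.exp(-eta * g)
        x = np.clip(x / x.sum(), 1e-12, None)
        x = x / x.sum()
    return x

def ftrl_log_barrier(returns, lam=1.0):
    """FTRL sketch: x_{t+1} = argmin_x sum_{s<=t} -log(r_s . x) - lam * sum_i log x_i.

    returns: (T, d) array of positive price relatives r_t.
    lam:     log-barrier weight (illustrative, not the paper's choice).
    """
    T, d = returns.shape
    x = np.full(d, 1.0 / d)             # start at the uniform portfolio
    portfolios, log_wealth = [], 0.0
    for t in range(T):
        portfolios.append(x)
        log_wealth += np.log(returns[t] @ x)    # round-t log return
        past = returns[: t + 1]
        def grad(z, past=past):
            # gradient of the regularized cumulative log loss
            return -(past.T @ (1.0 / (past @ z))) - lam / z
        x = solve_simplex_min(grad, d)
    return np.array(portfolios), log_wealth
```

Every iterate stays strictly inside the simplex because of the barrier term, which is what makes local-norm analyses of this regularizer possible.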
Regret Bounds for Online Portfolio Selection with a Cardinality Constraint
Ito, Shinji, Hatano, Daisuke, Sumita, Hanna, Yabe, Akihiro, Fukunaga, Takuro, Kakimura, Naonori, Kawarabayashi, Ken-Ichi
Online portfolio selection is a sequential decision-making problem in which a learner repeatedly selects a portfolio over a set of assets, aiming to maximize long-term return. In this paper, we study the problem under the cardinality constraint that the number of assets in a portfolio is at most k, and consider two scenarios: (i) in the full-feedback setting, the learner observes price relatives (rates of return to cost) for all assets, and (ii) in the bandit-feedback setting, the learner observes price relatives only for invested assets. We propose efficient algorithms for both scenarios that achieve sublinear regret. We also provide regret (statistical) lower bounds for both scenarios that nearly match the upper bounds when k is a constant. In addition, we give a computational lower bound implying that no algorithm can maintain both computational efficiency and a small regret upper bound.
Reviews: Regret Bounds for Online Portfolio Selection with a Cardinality Constraint
Summary: The paper studies the online portfolio selection problem under a cardinality constraint and provides two algorithms that achieve sublinear regret: one for the full-information setting and one for the bandit-feedback setting. The paper also provides lower bounds for both settings. Both algorithms split the problem into two learning problems: learning the optimal combination of assets and learning the optimal portfolio within a combination. To learn the optimal combination of assets, a version of either the multiplicative weights algorithm (full information) or Exp3 (bandit feedback) is used.
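The two-level decomposition the review describes can be sketched for the full-feedback case: multiplicative weights over the C(d, k) asset combinations, with an exponentiated-gradient portfolio learner inside each combination. The learning rates, the argmax selection rule, and the inner EG learner are illustrative assumptions, not the paper's exact algorithm.

```python
import itertools
import numpy as np

def cardinality_ops_full_feedback(returns, k, eta_mw=0.5, eta_eg=0.1):
    """Full-feedback sketch of the two-level scheme: an outer multiplicative
    weights update over asset combinations, and an inner EG portfolio
    learner per combination. Rates eta_mw, eta_eg are illustrative."""
    T, d = returns.shape
    combos = list(itertools.combinations(range(d), k))
    w = np.ones(len(combos))                        # MW weights over combos
    ports = [np.full(k, 1.0 / k) for _ in combos]   # one portfolio per combo
    log_wealth = 0.0
    for t in range(T):
        r = returns[t]
        # play the currently heaviest combination (sampling also works;
        # argmax keeps the sketch deterministic)
        i = int(np.argmax(w))
        S, x = combos[i], ports[i]
        log_wealth += np.log(r[list(S)] @ x)
        # full feedback: every combination observes its own log return
        gains = np.array([np.log(r[list(S_j)] @ x_j)
                          for S_j, x_j in zip(combos, ports)])
        w = w * np.exp(eta_mw * gains)
        w = w / w.sum()
        # inner EG (gradient-ascent) update for each combination's portfolio
        for j, (S_j, x_j) in enumerate(zip(combos, ports)):
            r_S = r[list(S_j)]
            x_new = x_j * np.exp(eta_eg * r_S / (r_S @ x_j))
            ports[j] = x_new / x_new.sum()
    return log_wealth, combos[int(np.argmax(w))]
```

Note that the outer loop touches all C(d, k) combinations each round, which is why the paper's computational lower bound matters: efficiency and small regret cannot both be maintained in general.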
Maximum-Likelihood Quantum State Tomography by Soft-Bayes
Lin, Chien-Ming, Hsu, Yu-Ming, Li, Yen-Huan
Quantum state tomography (QST), the task of estimating an unknown quantum state given measurement outcomes, is essential to building reliable quantum computing devices. Whereas computing the maximum-likelihood (ML) estimate corresponds to solving a finite-sum convex optimization problem, the objective function is neither smooth nor Lipschitz, so most existing convex optimization methods lack sample complexity guarantees; moreover, both the sample size and the dimension grow exponentially with the number of qubits in a QST experiment, so a desirable algorithm should be highly scalable with respect to the dimension and the sample size, just like stochastic gradient descent. In this paper, we propose a stochastic first-order algorithm that computes an $\varepsilon$-approximate ML estimate in $O( ( D \log D ) / \varepsilon^2 )$ iterations with $O( D^3 )$ per-iteration time complexity, where $D$ denotes the dimension of the unknown quantum state and $\varepsilon$ denotes the optimization error. Our algorithm is an extension of Soft-Bayes to the quantum setup.
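For context, here is the classical Soft-Bayes update that the quantum algorithm extends (for online portfolio selection over the simplex rather than density matrices). Each round mixes the current portfolio with its Bayesian reweighting, which keeps the iterate in the simplex in closed form. The learning rate eta is an illustrative constant; analyses typically tune it to the horizon.

```python
import numpy as np

def soft_bayes(returns, eta=0.1):
    """Classical Soft-Bayes sketch: x_{t+1,i} = x_{t,i} * ((1-eta) + eta * r_{t,i} / (r_t . x_t)).

    The update is a (1-eta, eta) mixture of the current portfolio and its
    multiplicative (Bayesian) reweighting, so x stays in the simplex:
    the reweighted part sums to exactly 1.
    """
    T, d = returns.shape
    x = np.full(d, 1.0 / d)
    log_wealth = 0.0
    for t in range(T):
        r = returns[t]
        log_wealth += np.log(r @ x)
        x = x * ((1.0 - eta) + eta * r / (r @ x))  # closed-form, stays in simplex
    return x, log_wealth
```

The appeal over FTRL-type methods is the closed-form update: no inner optimization is needed per round, which is what makes the quantum extension scalable in the dimension.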
Do Not Trade if You Cannot Predict the Market
Can you predict the market? Yes, you can, and if you cannot, do not trade. We will discuss our Trading Manifesto in more detail in a following post, but here is a preview of what we consider a sound basis for trading: truly efficient, scientifically grounded algorithmic trading should rest on the ability to predict market behavior.
Online Portfolio Selection: Principles and Algorithms, by Bin Li and Steven Chu Hong Hoi (ISBN 9781482249637)
Dr. Bin Li received a bachelor's degree in computer science from Huazhong University of Science and Technology, Wuhan, China, and a bachelor's degree in economics from Wuhan University, Wuhan, China, in 2006. He earned a PhD degree from the School of Computer Engineering of Nanyang Technological University, Singapore, in 2013. He completed the CFA Program in 2013 and is currently an associate professor of finance at the Economics and Management School of Wuhan University. Dr. Li was a postdoctoral research fellow at the Nanyang Business School of Nanyang Technological University. His research interests are computational finance and machine learning.